Skip to content

Conversation

shankarvigneshc
Copy link
Owner

@shankarvigneshc shankarvigneshc commented Sep 22, 2025

Describe your changes

Issue number

#000

Checklist before requesting a review

  • The pr title can be used to describe what this pr did in CHANGELOG.md file
  • I have executed pre-commit on my machine
  • I have passed pr-check on my machine

Thanks for your cooperation!

Summary by CodeRabbit

  • Refactor

    • Renamed input variable for private cluster exposure; update configurations accordingly.
    • Removed two input variables related to node pool FIPS and private cluster DNS; update any references.
  • Tests

    • Added unit tests covering autoscaler timing behavior, log analytics workspace handling, and validation of automatic channel upgrade values.

Copy link

coderabbitai bot commented Sep 22, 2025

📝 Walkthrough

Walkthrough

Removed two public Terraform variables and renamed one in variables.tf. Added five new unit tests in test/unit/unit_test.go. No other module structure or runtime control-flow changes indicated.

Changes

Cohort / File(s) Summary
Variable removals
variables.tf
Removed public variables default_node_pool_fips_enabled and dns_prefix_private_cluster.
Variable rename
variables.tf
Renamed private_cluster_enabledwhy_private_cluster_enabled (bool, default false).
Formatting
variables.tf
Minor end-of-file newline adjustment.
Unit tests added
test/unit/unit_test.go
Added tests: TestScaleDownDelayAfterDeleteNotSetShouldUseCustomScanInterval, TestScaleDownDelayAfterDeleteOverridesScanInterval, TestDisabledLogAnalyticsWithProvidedSolutionShouldNotCreate, Test_LogAnalyticsWorkspaceWithEmptyLocation_ShouldQueryDS, and TestInvalidAutomaticChannelUpgradeValue. Each uses the test harness and asserts Terraform outputs.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I thump the ground — two flags hop out of sight,
One gets a new name and hops with delight.
Tests scurry in, checking timers and logs,
I nibble the diff and chase tidy gnaws.
Carrots committed, the meadow sleeps light. 🥕🐇

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Description Check ⚠️ Warning The PR description is just the repository template with no substantive content: the "Describe your changes" section is empty, the issue number is the placeholder "#000", and the pre-review checklist items are unchecked, so required details (summary, rationale, affected files, testing/CI status, and issue linkage) are missing. Please replace the template placeholders with a concise summary of what changed (e.g., which variables were removed/renamed and that unit tests were added), provide the correct issue number or state "none", mark the checklist items after running pre-commit and CI locally, and include brief notes on testing, impact, and any migration or compatibility considerations.
✅ Passed checks (2 passed)
Check name Status Explanation
Title Check ✅ Passed The title "Update variables.tf" directly relates to the changes to variables.tf (removals and a rename) present in the changeset and is concise; it is somewhat generic because the PR also adds multiple unit tests, but it still describes a real, central aspect of the change.
Docstring Coverage ✅ Passed Docstring coverage is 83.33% which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch coderabbit-tf

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link

coderabbitai bot commented Sep 22, 2025

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

2 similar comments
Copy link

coderabbitai bot commented Sep 22, 2025

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

Copy link

coderabbitai bot commented Sep 22, 2025

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

Copy link

coderabbitai bot commented Sep 22, 2025

UTG Post-Process Complete

No new issues were detected in the generated code and all check runs have completed. The unit test generation process has completed successfully.

Copy link

coderabbitai bot commented Sep 22, 2025

Creating a PR to put the unit tests in...

The changes have been created in this pull request: View PR

Copy link

coderabbitai bot commented Sep 22, 2025

Here are the copyable unit test edits:

Copyable Edits

tests/test_variables_tf.py

This is a new file.

"""
Tests focusing on Terraform variable validation logic introduced/modified in the PR diff.

Testing library/framework: pytest (Python). No external dependencies are required.
Strategy: statically parse HCL (.tf) files to verify validation expressions, defaults,
types, and error messages for specific variables without invoking the Terraform CLI.

Covered variables (from diff):
- auto_scaler_profile_expander
- automatic_channel_upgrade
- green_field_application_gateway_for_ingress
- http_proxy_config
- identity_type
- kms_key_vault_network_access
- load_balancer_sku
- log_analytics_solution
- sku_tier
- support_plan
"""

from __future__ import annotations

from pathlib import Path
import re
import os
import pytest


# ---------- Helpers: HCL variable block discovery and simple parsing ----------

REPO_ROOT = Path(__file__).resolve().parents[1]


def _iter_tf_files():
    for p in REPO_ROOT.rglob("*.tf"):
        # Skip Terraform working dirs and cache
        if any(part.startswith(".terraform") for part in p.parts):
            continue
        yield p


def _find_variable_block(var_name: str) -> tuple[Path, str]:
    """
    Locate and return (path, block_text) for: variable "<var_name>" { ... }.
    Raises AssertionError if not found or braces are unbalanced.
    """
    pat = re.compile(rf'variable\s+"{re.escape(var_name)}"\s*\{{', re.IGNORECASE)
    for tf in _iter_tf_files():
        text = tf.read_text(encoding="utf-8")
        m = pat.search(text)
        if not m:
            continue
        # start at the opening brace of the variable block
        start = m.end() - 1
        block = _extract_brace_block(text, start_index=start)
        return tf, block
    raise AssertionError(f'variable "{var_name}" not found')


def _extract_brace_block(s: str, start_index: int) -> str:
    """Return substring from the opening brace at start_index through its matching closing brace."""
    assert s[start_index] == "{", "Expected opening brace at start_index"
    depth = 0
    i = start_index
    while i < len(s):
        c = s[i]
        if c == "{":
            depth += 1
        elif c == "}":
            depth -= 1
            if depth == 0:
                return s[start_index : i + 1]
        i += 1
    raise AssertionError("Unbalanced braces.")


def _extract_first_block(block_text: str, header_keyword: str) -> str:
    """
    Extract the first nested block for a given keyword (e.g., 'validation', 'type = object {').

    For 'validation', we match `validation { ... }`.

    For object types, match `type = object { ... }`.

    Returns the inner-brace-inclusive string starting at '{' and ending at the matching '}'.
    """
    if header_keyword == "validation":
        m = re.search(r"\bvalidation\s*\{", block_text)
    elif header_keyword == "object_type":
        m = re.search(r"\btype\s*=\s*object\s*\{", block_text)
    else:
        raise ValueError("Unsupported header keyword")

    if not m:
        return ""
    start = m.end() - 1
    return _extract_brace_block(block_text, start_index=start)


def _default_value_of(block_text: str):
    """
    Return:
      - None if default = null (explicit) or no default found
      - string for quoted defaults
      - raw token for unquoted simple defaults
    """
    m_null = re.search(r"^\s*default\s*=\s*null\s*$", block_text, re.MULTILINE | re.IGNORECASE)
    if m_null:
        return None
    m_str = re.search(r'^\s*default\s*=\s*"([^"]*)"\s*$', block_text, re.MULTILINE)
    if m_str:
        return m_str.group(1)
    m_tok = re.search(r"^\s*default\s*=\s*([^\s#]+)\s*$", block_text, re.MULTILINE)
    if m_tok:
        return m_tok.group(1)
    return None


def _error_message_text(block_text: str) -> str:
    m = re.search(r'error_message\s*=\s*"([^"]*)"', block_text)
    return m.group(1) if m else ""


def _contains_list_from_validation(validation_block: str) -> set[str] | None:
    """
    Parse contains([ "a","b",... ], var.xxx) and return the set of allowed values.
    If not present, return None.
    """
    m = re.search(r"contains\s*\(\s*\[(.*?)\]\s*,", validation_block, re.DOTALL)
    if not m:
        return None
    raw = m.group(1)
    items = [x.strip() for x in raw.split(",") if x.strip()]
    out: list[str] = []
    for it in items:
        q = re.match(r'"([^"]*)"', it)
        if q:
            out.append(q.group(1))
    return set(out)


def _equality_values_from_validation(validation_block: str) -> set[str]:
    """
    Collect RHS string literals from occurrences like: var.x == "Value".
    """
    return set(re.findall(r'==\s*"([^"]+)"', validation_block))


def _object_field_names(object_type_block: str) -> set[str]:
    """
    Return field names declared within an object({ ... }) type block.
    Matches both optional(...) and direct type assignments e.g., id = string.
    """
    if not object_type_block:
        return set()
    return set(re.findall(r"^\s*([A-Za-z_]\w*)\s*=", object_type_block, re.MULTILINE))


# Small assertion helper to ensure error messages mention all expected tokens
def _assert_message_mentions_all(msg: str, tokens: set[str]):
    for t in tokens:
        assert t in msg, f"Expected token '{t}' in error message, got: {msg\!r}"


# ---------- Tests for specific variables from the diff ----------


def test_auto_scaler_profile_expander_validation_and_default():
    _, block = _find_variable_block("auto_scaler_profile_expander")
    validation = _extract_first_block(block, "validation")
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"least-waste", "most-pods", "priority", "random"}
    assert _default_value_of(block) == "random"
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"least-waste", "most-pods", "priority", "random"})


def test_automatic_channel_upgrade_allows_null_or_known_values():
    _, block = _find_variable_block("automatic_channel_upgrade")
    # default should be null (disabled by default)
    assert _default_value_of(block) is None
    validation = _extract_first_block(block, "validation")
    # Ensure the null short-circuit exists
    assert re.search(r"==\s*null\s*\?\s*true\s*:", validation), "Expected ternary null short-circuit in validation."
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"patch", "stable", "rapid", "node-image"}
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"patch", "stable", "rapid", "node-image"})


def test_green_field_application_gateway_for_ingress_requires_subnet_one_of():
    _, block = _find_variable_block("green_field_application_gateway_for_ingress")
    # default null is allowed
    assert _default_value_of(block) is None
    obj = _extract_first_block(block, "object_type")
    fields = _object_field_names(obj)
    assert {"name", "subnet_cidr", "subnet_id"}.issubset(fields)
    validation = _extract_first_block(block, "validation")
    # Ensure validation references both subnet_id and subnet_cidr via coalesce
    assert "subnet_id" in validation and "subnet_cidr" in validation
    assert "coalesce(" in validation and "can(" in validation
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"subnet_cidr", "subnet_id"})


def test_http_proxy_config_requires_at_least_one_proxy():
    _, block = _find_variable_block("http_proxy_config")
    assert _default_value_of(block) is None
    obj = _extract_first_block(block, "object_type")
    fields = _object_field_names(obj)
    assert {"http_proxy", "https_proxy", "no_proxy", "trusted_ca"}.issubset(fields)
    # Ensure no_proxy type remains list(string)
    assert re.search(r"no_proxy\s*=\s*optional\s*\(\s*list\s*\(\s*string\s*\)\s*\)", obj), "Expected no_proxy to be optional(list(string))"
    validation = _extract_first_block(block, "validation")
    assert "http_proxy_config.http_proxy" in validation
    assert "http_proxy_config.https_proxy" in validation
    assert "coalesce(" in validation and "can(" in validation
    msg = _error_message_text(block)
    assert "cannot be both empty" in msg


def test_identity_type_allowed_values_and_default():
    _, block = _find_variable_block("identity_type")
    assert _default_value_of(block) == "SystemAssigned"
    validation = _extract_first_block(block, "validation")
    eq_vals = _equality_values_from_validation(validation)
    assert {"SystemAssigned", "UserAssigned"}.issubset(eq_vals)
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"SystemAssigned", "UserAssigned"})


def test_kms_key_vault_network_access_allowed_values_and_default():
    _, block = _find_variable_block("kms_key_vault_network_access")
    assert _default_value_of(block) == "Public"
    validation = _extract_first_block(block, "validation")
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"Private", "Public"}
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"Private", "Public"})


def test_load_balancer_sku_allowed_values_and_default():
    _, block = _find_variable_block("load_balancer_sku")
    assert _default_value_of(block) == "standard"
    validation = _extract_first_block(block, "validation")
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"basic", "standard"}
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"basic", "standard"})


def test_log_analytics_solution_null_or_non_empty_id():
    _, block = _find_variable_block("log_analytics_solution")
    assert _default_value_of(block) is None
    obj = _extract_first_block(block, "object_type")
    fields = _object_field_names(obj)
    assert "id" in fields
    # Ensure id is declared as string
    assert re.search(r"^\s*id\s*=\s*string\s*$", obj, re.MULTILINE), "Expected id = string in object type."
    validation = _extract_first_block(block, "validation")
    # Must accept null or object with non-empty id
    assert re.search(r"==\s*null\s*\?\s*true\s*:", validation), "Expected ternary null short-circuit in validation."
    assert re.search(r"\.id\s*\\!=\s*null", validation) and re.search(r'\.id\s*\\!=\s*""', validation)
    msg = _error_message_text(block)
    assert "must be `null` or an object with a valid `id`" in msg.replace("\u2019", "'").replace("`", "`")


def test_sku_tier_allowed_values_and_default_and_message_mentions_paid():
    _, block = _find_variable_block("sku_tier")
    assert _default_value_of(block) == "Free"
    validation = _extract_first_block(block, "validation")
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"Free", "Standard", "Premium"}
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"Free", "Standard", "Premium"})
    # Messaging explicitly mentions that 'Paid' is no longer supported
    assert "Paid" in msg


def test_support_plan_allowed_values_and_default():
    _, block = _find_variable_block("support_plan")
    assert _default_value_of(block) == "KubernetesOfficial"
    validation = _extract_first_block(block, "validation")
    allowed = _contains_list_from_validation(validation)
    assert allowed == {"KubernetesOfficial", "AKSLongTermSupport"}
    msg = _error_message_text(block)
    _assert_message_mentions_all(msg, {"KubernetesOfficial", "AKSLongTermSupport"})

Copy link

coderabbitai bot commented Sep 22, 2025

Creating a local commit for the generated unit tests...

The unit tests have been committed to the current branch. Commit ID: bf6e6a1cac64a9a7a69f135b536c0948c3759308

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (4)
test/unit/unit_test.go (4)

312-330: Consider improving the test name for clarity.

The test name TestScaleDownDelayAfterDeleteNotSetShouldUseCustomScanInterval could be more concise while maintaining clarity. Consider renaming to TestScaleDownDelayAfterDeleteFallbackToCustomScanInterval.


327-328: Redundant assertion can be simplified.

The test already sets auto_scaler_profile_scan_interval to "30s" on line 316, so asserting its value on lines 327-328 is redundant. The only meaningful assertion here is that scaleDownDelayAfterDelete equals scanInterval.

-		assert.Equal(t, "30s", scanInterval)
 		assert.Equal(t, scanInterval, scaleDownDelayAfterDelete)

379-401: Consider adding a test helper comment about the distinction.

While the test correctly validates that an empty location string triggers data source querying, it would be helpful to document why this differs from the test on line 281 where location is omitted entirely.

Add a comment before line 380:

 // Test_LogAnalyticsWorkspaceWithEmptyLocation_ShouldQueryDS ensures that an explicitly empty location
 // triggers the same behavior as not specifying a location at all.
+// This differs from Test_LogAnalyticsWorkspaceWithoutLocationShouldQueryDSForWorkspaceLocation where
+// the location field is omitted from the map entirely, testing both edge cases for proper handling.
 func Test_LogAnalyticsWorkspaceWithEmptyLocation_ShouldQueryDS(t *testing.T) {

391-391: Redundant test helper declaration.

The t.Helper() call on line 397 already exists, making line 391's deferred function unnecessary verbose.

 	varFile := test_helper.VarsToFile(t, vars)
-	defer func() { _ = os.Remove(varFile) }()
+	defer os.Remove(varFile)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d9f899c and bf6e6a1.

📒 Files selected for processing (1)
  • test/unit/unit_test.go (1 hunks)
🔇 Additional comments (2)
test/unit/unit_test.go (2)

334-353: LGTM! Well-structured test for override behavior.

The test clearly validates that an explicit auto_scaler_profile_scale_down_delay_after_delete value takes precedence over auto_scaler_profile_scan_interval, and includes appropriate assertions for both values.


355-377: LGTM! Good edge case coverage.

This test properly validates that the workspace being disabled takes precedence over any provided solution configuration, which is important defensive behavior.

Comment on lines +403 to +423
// TestInvalidAutomaticChannelUpgradeValue guards against unexpected inputs by ensuring
// the check is not considered passing for unknown channels.
func TestInvalidAutomaticChannelUpgradeValue(t *testing.T) {
t.Parallel()
vars := map[string]interface{}{
"prefix": "foo",
"resource_group_name": "bar",
"location": "eastus",
"automatic_channel_upgrade": "unknown", // defensive test for invalid input
}

test_helper.RunUnitTest(t, "../../", "unit-test-fixture", terraform.Options{
Upgrade: false,
Vars: vars,
}, func(t *testing.T, output test_helper.TerraformOutput) {
t.Helper()
upgrades, ok := output["automatic_channel_upgrade_check"].(bool)
assert.True(t, ok)
assert.False(t, upgrades, "unknown automatic_channel_upgrade should not pass the check")
})
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

🧩 Analysis chain

Consider verifying the fixture handles invalid values gracefully.

The test assumes that the fixture properly handles the "unknown" value for automatic_channel_upgrade. If the fixture doesn't validate this input, the test might pass for the wrong reasons (e.g., due to a panic or error rather than deliberate validation).


🏁 Script executed:

#!/bin/bash
# Check if the fixture validates automatic_channel_upgrade values
fd -e tf -e hcl . --exec grep -l "automatic_channel_upgrade" | head -5 | xargs grep -A5 -B5 "automatic_channel_upgrade"

Length of output: 6973


Fix test: fixture rejects unknown automatic_channel_upgrade — update test or input

unit-test-fixture/variables.tf contains a validation that only allows "patch", "stable", "rapid", "node-image", so passing "unknown" will trigger Terraform variable validation. Update TestInvalidAutomaticChannelUpgradeValue (test/unit/unit_test.go lines 403–423) to either expect a validation error or use an allowed value.

🤖 Prompt for AI Agents
test/unit/unit_test.go lines 403-423: the test currently sets
automatic_channel_upgrade to "unknown" which the fixture's variables.tf
validation rejects; either change the input to one of the allowed values
("patch", "stable", "rapid", "node-image") and update the assertions to match
the expected output for that allowed value, or modify the test to assert that
Terraform variable validation fails (i.e., expect and assert a validation error
from the test helper instead of treating the run as successful).

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

1 participant